Heuristics

Etymology

dall-e 3: "illustration on black background of archimedes discovering the physical law and crying with face full of enthusiasm the word EUREKA (word should be depicted in the comics-like bubble"

The term "heuristics" comes from the Greek word "heuriskein," which means "to find" or "to discover." This term reflects the idea of finding or discovering solutions through intuitive or trial-and-error methods.

Mathematico-logical heuristics

Mathematico-logical heuristics use structured mathematical or logical methods to solve problems. They include techniques like calculus for optimizing functions, linear programming for maximizing or minimizing linear objectives under constraints, and satisfiability solving (e.g., the 3-SAT problem) for complex logical constraints. These heuristics apply rigorous mathematical rules and logic to break down and solve problems step by step. They are especially useful for structured problems where precise, logical solutions are needed, as in operations research, computer science, and engineering.

Calculus

Calculus-based heuristics, like Newton's method or gradient descent, use principles of calculus to find solutions to complex problems. They involve calculating slopes or gradients to understand how a function changes. For instance, gradient descent moves towards the lowest point of a function, similar to finding the bottom of a valley - this point is often the best solution. These methods are powerful for optimizing problems, like in machine learning or economics, where finding the most efficient point is crucial.

Newton's method

Newton's Method is a calculus-based technique for finding the roots of a function, the points where the function equals zero. It starts with a guess and repeatedly applies a formula to get closer to the root, using the function's slope (derivative) to improve each guess. It's effective for smooth, continuous functions but can fail for functions with no derivative, near-zero slopes, or multiple roots close to each other. It's great for precise, math-heavy problems but not for erratic or non-differentiable functions.
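The guess-and-improve loop above can be sketched in a few lines of Python (the example function x² − 2 is illustrative; its positive root is √2):

```python
def newton(f, df, x0, tol=1e-10, max_iter=50):
    """Find a root of f starting from guess x0, using f's derivative df."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / df(x)  # step along the tangent line toward zero
    return x

# Example: the positive root of x^2 - 2 is sqrt(2)
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

Note the division by `df(x)`: this is exactly where the method breaks down on flat slopes (derivative near zero).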

Gradient descent

Gradient Descent is a method used to find the minimum of a function. Imagine walking downhill towards the lowest point in a valley—that's what this method does mathematically. It calculates the gradient (the slope) of the function and takes steps in the direction that decreases the function's value. It's powerful for optimizing in machine learning and economics. However, it struggles with functions having many valleys (local minima) or plateaus, and might not find the absolute lowest point (global minimum).
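A minimal one-dimensional sketch of that downhill walk (the target function (x − 3)² and the step size are illustrative choices):

```python
def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Walk downhill: repeatedly step against the gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)  # move a small step in the descent direction
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2*(x - 3); the minimum is at x = 3
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
```

The learning rate `lr` controls step size: too large and the walk overshoots the valley, too small and convergence is slow.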

Linear Programming

Linear Programming (LP) is a mathematical method for finding the best outcome in a model whose requirements are represented by linear relationships. It's like playing a game where you need to achieve the highest score (maximize) or the lowest score (minimize) under certain rules. These rules are your constraints, like how much money you can spend or how many hours you have. The score you're trying to optimize is called the objective function, and it's also linear. LP finds the best way to play this game, balancing all the rules to achieve your goal, whether that's making the most profit or using the least resources. It's a powerful tool for decision-making in business, engineering, and economics.
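A useful fact about LP: a linear objective over a feasible polygon always attains its optimum at a vertex. For two variables this gives a tiny self-contained solver, sketched below by intersecting constraint boundaries in pairs (the objective and constraint numbers are illustrative, not from the text):

```python
from itertools import combinations

def solve_lp_2d(c, A, b):
    """Maximize c·(x, y) subject to A·(x, y) <= b by enumerating vertices."""
    best = None
    for (a1, b1), (a2, b2) in combinations(zip(A, b), 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel boundaries: no unique intersection point
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        # Keep the vertex only if it satisfies every constraint
        if all(ai[0] * x + ai[1] * y <= bi + 1e-9 for ai, bi in zip(A, b)):
            val = c[0] * x + c[1] * y
            if best is None or val > best[0]:
                best = (val, x, y)
    return best

# Maximize 3x + 5y subject to x <= 4, 2y <= 12, 3x + 2y <= 18, x >= 0, y >= 0
value, x, y = solve_lp_2d(
    c=(3, 5),
    A=[(1, 0), (0, 2), (3, 2), (-1, 0), (0, -1)],
    b=[4, 12, 18, 0, 0],
)
```

Real solvers (the simplex method, interior-point methods) scale this vertex idea to thousands of variables instead of brute-forcing intersections.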

3-SAT Problem

The 3-SAT (3-Satisfiability) problem is a classic question in computer science and mathematical logic. It's a specific type of Boolean satisfiability problem. In 3-SAT, you're given a formula composed of several clauses, where each clause is a disjunction (an "OR" operation) of exactly three literals (variables or their negations). The challenge is to determine if there exists an assignment of truth values (true or false) to the variables that makes the entire formula true. This problem is known for being NP-complete, meaning it's easy to check a solution but potentially very hard to find one, especially as the number of variables increases. This characteristic makes 3-SAT important in theoretical computer science, particularly in studies related to computational complexity.
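The "hard to find, easy to check" nature shows up directly in a brute-force sketch: the search loop is exponential in the number of variables, while the check inside it is a quick pass over the clauses. The formula below is an illustrative example, not from the text:

```python
from itertools import product

def solve_3sat(clauses, n_vars):
    """Brute force: try all 2^n truth assignments.

    Each clause is three literals; literal k means variable |k|,
    and a negative sign means the variable is negated.
    """
    for bits in product([False, True], repeat=n_vars):
        assign = {i + 1: bits[i] for i in range(n_vars)}
        # A clause is satisfied if any of its three literals is true
        if all(any(assign[abs(l)] == (l > 0) for l in clause)
               for clause in clauses):
            return assign
    return None  # unsatisfiable

# (x1 or x2 or not x3) and (not x1 or x3 or x2) and (not x2 or not x3 or x1)
model = solve_3sat([(1, 2, -3), (-1, 3, 2), (-2, -3, 1)], n_vars=3)
```

Verifying a candidate assignment takes linear time in the formula size; that asymmetry is exactly what NP-completeness captures.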

Nature-inspired heuristics

Nature-inspired heuristics are problem-solving methods modeled after natural processes. Like how birds flock or bees forage, these algorithms mimic nature to tackle complex problems. They use strategies like evolution, ant colony behavior, or bird flocking to find good solutions, blending randomness with specific rules from nature. These methods are useful for tough problems where traditional approaches might fail, creatively applying nature's wisdom to areas like computer science, engineering, and logistics to find efficient, often surprising, solutions.

Simulated annealing

Simulated Annealing is a technique for finding good solutions to tough problems. It's like trying different temperatures to shape a metal perfectly. At first, it allows big changes (high temperature), even if they seem wrong, to explore different options. Gradually, it cools down (lowers the temperature), making smaller, more careful adjustments. This process helps avoid getting stuck with a not-so-great solution (local optimum) and aims for a better one, although it might not always be the best possible (global optimum).
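The cooling schedule and the acceptance rule are the heart of the method. A minimal sketch for minimizing a bumpy one-dimensional function (the function, temperatures, and step distribution are illustrative choices):

```python
import math
import random

def simulated_annealing(f, x0, temp=10.0, cooling=0.99, steps=2000, seed=0):
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    for _ in range(steps):
        candidate = x + rng.gauss(0, 1)  # propose a nearby random move
        fc = f(candidate)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature cools
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
            if fx < fbest:
                best, fbest = x, fx
        temp *= cooling  # gradually lower the temperature
    return best, fbest

# A bumpy function with several local minima; start far from the good region
best_x, best_f = simulated_annealing(lambda x: x * x + 10 * math.sin(x), x0=8.0)
```

Early on (high temperature) the walker readily climbs out of valleys; late in the run it behaves almost like plain hill-descending.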

Ant colony optimization

Ant Colony Optimization is inspired by how real ants find the shortest paths to food. In this method, virtual ants roam through possible solutions, leaving pheromone trails. Stronger trails attract more ants, suggesting better solutions. Over time, the best paths have more pheromones, guiding ants towards them. It's great for complex problems like routing or scheduling, where finding the best route matters. It's like following a trail to the best answer based on the collective wisdom of many trials.
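A compact sketch for a four-city travelling-salesman tour, with evaporation and reinforcement of the best tour found so far (the distance matrix and all parameters are illustrative):

```python
import random

def ant_colony_tsp(dist, n_ants=10, n_iters=50, evap=0.5, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    pher = [[1.0] * n for _ in range(n)]  # pheromone level on each edge
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            tour = [0]
            while len(tour) < n:
                i = tour[-1]
                choices = [j for j in range(n) if j not in tour]
                # Prefer edges with strong pheromone and short distance
                weights = [pher[i][j] / dist[i][j] for j in choices]
                tour.append(rng.choices(choices, weights=weights)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate all trails, then reinforce the best tour's edges
        for i in range(n):
            for j in range(n):
                pher[i][j] *= evap
        for k in range(n):
            i, j = best_tour[k], best_tour[(k + 1) % n]
            pher[i][j] += 1.0 / best_len
            pher[j][i] += 1.0 / best_len
    return best_tour, best_len

# Four cities: adjacent pairs are distance 1, diagonals are distance 2
d = [[0, 1, 2, 1],
     [1, 0, 1, 2],
     [2, 1, 0, 1],
     [1, 2, 1, 0]]
tour, length = ant_colony_tsp(d)
```

Evaporation keeps old, mediocre trails from dominating forever, which is what lets the colony shift toward newly discovered shorter routes.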

Evolutionary optimization

Evolutionary Optimization mimics natural selection, like how animals evolve. Imagine a population of potential solutions. Those fitting the problem best (like the fittest animals) are chosen to 'reproduce.' They mix and mutate to create new solutions or 'offspring.' Over generations, this process 'evolves' better solutions, as unfit ones are discarded. It's a trial-and-error method using principles of evolution, effectively finding good solutions for complex problems by simulating survival of the fittest in a virtual environment.

Genetic Algorithm

A Genetic Algorithm is a method in evolutionary optimization that solves problems by mimicking natural evolution. Imagine a survival contest where each participant (solution) has traits (parameters). These solutions breed and mutate, creating new generations. The fittest solutions, judged by a fitness function, survive to breed again. Over time, this process 'evolves' increasingly effective solutions. It's like nature's trial-and-error but used for complex problems like route planning, where finding the best or a good-enough solution is essential.
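The breed-mutate-select loop can be sketched end to end on the classic "OneMax" toy task, evolving a bit-string toward all 1s (the task and all parameter values are illustrative):

```python
import random

def genetic_algorithm(n_bits=20, pop_size=30, generations=60,
                      mutation_rate=0.02, seed=0):
    rng = random.Random(seed)
    fitness = lambda bits: sum(bits)  # OneMax: count the 1s
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        next_pop = []
        for _ in range(pop_size):
            # Selection: the fitter of two random individuals gets to breed
            p1 = max(rng.sample(pop, 2), key=fitness)
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n_bits)  # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n_bits):  # mutation: rare random bit flips
                if rng.random() < mutation_rate:
                    child[i] = 1 - child[i]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

best = genetic_algorithm()
```

Over sixty generations the population drifts toward the all-ones string, even though no individual step "knows" the answer.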

Selection

In evolutionary optimization, selection is like a survival test for candidate solutions, deciding which ones get to 'reproduce.' Selection operators are the rules determining who passes this test. They might choose the fittest solutions (those solving the problem best) or sometimes include random or less fit ones for diversity. 
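One common selection operator is tournament selection: draw a few random contestants and let the fittest of them reproduce. A minimal sketch (the population of numbers and the fitness function are illustrative):

```python
import random

def tournament_select(population, fitness, k=3, rng=random):
    """Pick k random contestants; the one with the best fitness wins."""
    contestants = rng.sample(population, k)
    return max(contestants, key=fitness)

rng = random.Random(42)
pop = [3, 7, 1, 9, 4, 6]
winner = tournament_select(pop, fitness=lambda x: x, k=3, rng=rng)
```

Small tournaments keep some randomness (weak solutions occasionally win), which preserves the diversity the text mentions.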

Variation

In evolutionary optimization, variation is the process of introducing diversity into the population of solutions. Like genetic mutations and breeding in nature, it involves altering the 'genes' (parameters) of candidate solutions to create new, different ones. This can be done through mutation (changing some parameters) or crossover (mixing parameters from two solutions). Variation is crucial for exploring new solutions and avoiding getting stuck with suboptimal ones, much like how biological diversity is key to the survival and evolution of species.
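The two variation operators named above, crossover and mutation, are each a few lines on bit-string genomes (the parents and rates are illustrative):

```python
import random

rng = random.Random(0)

def crossover(parent_a, parent_b, rng=rng):
    """One-point crossover: splice the head of one parent onto the other's tail."""
    cut = rng.randrange(1, len(parent_a))
    return parent_a[:cut] + parent_b[cut:]

def mutate(bits, rate=0.1, rng=rng):
    """Flip each bit independently with a small probability."""
    return [1 - b if rng.random() < rate else b for b in bits]

child = mutate(crossover([0, 0, 0, 0, 0], [1, 1, 1, 1, 1]))
```

Crossover recombines traits that already exist in the population, while mutation injects genuinely new ones; most algorithms need both.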

Replication

In evolutionary optimization, replication is like making copies of the best solutions. Imagine a survival contest where top performers are cloned. These copies then undergo changes (mutations) or combine features (crossover) to create new solutions. Replication ensures good traits are passed on, increasing chances that future generations will perform even better. It's used in algorithms to solve complex problems, where keeping and tweaking successful solutions gradually leads to finding the best or a very good answer.
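A common form of replication is elitism: copy the top performers unchanged into the next generation so good traits can never be lost to random variation. A small sketch (the population and fitness are illustrative):

```python
def replicate_elites(population, fitness, n_elites=2):
    """Return copies of the n_elites fittest individuals."""
    ranked = sorted(population, key=fitness, reverse=True)
    return [list(ind) for ind in ranked[:n_elites]]  # copies, not references

pop = [[0, 1, 0], [1, 1, 1], [1, 0, 1], [0, 0, 0]]
elites = replicate_elites(pop, fitness=sum)
```

Copying (rather than referencing) matters: the originals may still be mutated or crossed over, and the elite copies must survive intact.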

Genetic programming

Genetic Programming (GP) is a type of evolutionary optimization where programs themselves evolve to solve problems. Imagine a computer automatically writing and modifying its own code to get better at a task. In GP, each 'individual' is a computer program. These programs are tested for their ability to solve a problem, and the best ones are modified (mutated) or combined (crossed over) to create new programs. Over generations, this process evolves programs that become increasingly effective at the task.

Grammatical evolution

Grammatical Evolution is a type of evolutionary optimization where solutions are generated using a predefined set of rules, like a grammar in language. Imagine creating sentences using grammar rules, but here, the 'sentences' are computer programs or formulas. Each solution must follow these rules, ensuring they make sense. Like in genetic programming, these solutions evolve over generations, becoming better at solving a problem. It's particularly useful when solutions need a specific structure or format, allowing for complex, yet orderly, evolution.

Caching

Caching is a technique used in computing to store frequently accessed data in a readily available location for quick retrieval. It's like having a small, fast memory space for storing the most commonly used information, reducing the need to repeatedly access slower storage areas. In problem-solving, caching is crucial for enhancing performance and efficiency. By remembering previously computed results, systems can avoid redoing complex calculations, significantly speeding up the process of solving repetitive or similar problems. This approach is widely used in software development, web browsing, and data processing.
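Memoization is the problem-solving form of caching described above: remember each computed result so repeated subproblems are solved once. Python's standard library provides this directly via `functools.lru_cache`; the naive recursive Fibonacci below would take billions of calls without it:

```python
from functools import lru_cache

@lru_cache(maxsize=None)  # cache every (n -> result) pair seen so far
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

result = fib(60)  # fast with the cache; astronomically slow without it
```

With the cache, each `fib(k)` is computed exactly once, turning exponential work into linear work.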

Human heuristics

Human heuristics are simple, intuitive rules we use to make quick decisions, like "avoid dark alleys at night." They are based on our experiences and common sense, helping us navigate everyday choices efficiently without much thought. Useful in fast-paced or uncertain situations, these shortcuts can lead to good enough decisions. However, they're not always reliable for complex, critical decisions or in unfamiliar contexts, as they can oversimplify situations and be influenced by biases, potentially leading to poor choices.

Trial and Error

Swarm intelligence

Imitation

Memory

Repetition

Game & Playfulness

Curiosity

Intuition
